Snakes and their bio-inspired robot counterparts have demonstrated locomotion on a wide range of terrains. However, dynamic vertical climbing is one locomotion strategy that has received little attention in the existing snake robotics literature. We demonstrate a new scansorial gait and robot inspired by the locomotion of the Pacific Lamprey. This new gait allows a robot to steer while climbing on flat, near-vertical surfaces. A reduced-order model is developed and used to explore the relationship between body actuation and the vertical and lateral motions of the robot. Trident, the new wall-climbing lamprey-inspired robot, demonstrates dynamic climbing on flat vertical surfaces with a peak net vertical stride displacement of 4.1 cm per step. Actuating at 1.3 Hz, Trident attains a vertical climbing speed of 4.8 cm/s (0.09 Bl/s) at a specific resistance of 8.3. Trident can also traverse laterally at 9 cm/s (0.17 Bl/s). Moreover, Trident is able to make 14\% longer strides than the Pacific Lamprey when climbing vertically. The computational and experimental results demonstrate that a lamprey-inspired climbing gait, coupled with appropriate attachment, is a useful climbing strategy for snake robots on near-vertical surfaces with limited push points.
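As a quick back-of-the-envelope consistency check on these figures (our own arithmetic, not taken from the paper), climbing speed is the product of actuation frequency and average stride displacement:

$$ v = f \, \bar{d}_{\text{stride}} \quad\Longrightarrow\quad \bar{d}_{\text{stride}} = \frac{4.8\ \text{cm/s}}{1.3\ \text{Hz}} \approx 3.7\ \text{cm}, $$

so the average stride implied by the reported speed sits somewhat below the 4.1 cm peak stride, as one would expect of a peak value.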
In this paper, we present the Multi-view Extended Videos with Identities (MEVID) dataset for large-scale, video person re-identification (ReID) in the wild. To our knowledge, MEVID represents the most varied video person ReID dataset, spanning an extensive indoor and outdoor environment across nine unique dates in a 73-day window, various camera viewpoints, and entity clothing changes. Specifically, we label the identities of 158 unique people wearing 598 outfits taken from 8,092 tracklets, with an average length of about 590 frames, seen in 33 camera views from the very large-scale MEVA person activities dataset. While other datasets have more unique identities, MEVID emphasizes a richer set of information about each individual, such as: 4 outfits/identity vs. 2 outfits/identity in CCVID, 33 viewpoints across 17 locations vs. 6 in 5 simulated locations for MTA, and 10 million frames vs. 3 million for LS-VID. Being based on the MEVA video dataset, we also inherit data that is intentionally demographically balanced to the continental United States. To accelerate the annotation process, we developed a semi-automatic annotation framework and GUI that combines state-of-the-art real-time models for object detection, pose estimation, person ReID, and multi-object tracking. We evaluate several state-of-the-art methods on MEVID challenge problems and comprehensively quantify their robustness in terms of changes of outfit, scale, and background location. Our quantitative analysis of the realistic, unique aspects of MEVID shows that there are significant remaining challenges in video person ReID and indicates important directions for future research.
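To make the semi-automatic annotation flow concrete, here is a minimal Python sketch of that kind of loop; every function name and interface below is our illustrative assumption, not the authors' actual framework:

```python
from collections import defaultdict

def annotate_video(frames, detect, estimate_pose, embed_reid, track):
    """Hypothetical semi-automatic annotation loop: automated models propose
    tracklets and appearance features; a human then verifies and corrects
    identities in a GUI. All callables here are illustrative stand-ins."""
    tracklets = defaultdict(list)  # track_id -> list of (box, reid_feature)
    for frame in frames:
        boxes = detect(frame)                                   # person bounding boxes
        boxes = [b for b in boxes if estimate_pose(frame, b)]   # drop implausible boxes
        feats = [embed_reid(frame, b) for b in boxes]           # ReID embeddings
        for track_id, box, feat in track(boxes, feats):         # multi-object tracker
            tracklets[track_id].append((box, feat))
    return tracklets  # candidate tracklets, to be merged/labeled by annotators
```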
Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary's capabilities and which does not apply when membership itself is not sensitive. In this paper, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods -- Rényi differential privacy and Fisher information leakage -- both offer strong semantic protection against data reconstruction attacks.
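For reference, the Rényi differential privacy notion invoked above is the standard one from the DP literature (general background, not specific to this paper): a mechanism $M$ satisfies $(\alpha, \varepsilon)$-RDP if, for all adjacent datasets $D, D'$,

$$ D_\alpha\big(M(D)\,\|\,M(D')\big) = \frac{1}{\alpha - 1} \log \mathbb{E}_{x \sim M(D')}\!\left[\left(\frac{\Pr[M(D) = x]}{\Pr[M(D') = x]}\right)^{\!\alpha}\right] \le \varepsilon. $$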
Abnormal airway dilatation, termed traction bronchiectasis, is a typical feature of idiopathic pulmonary fibrosis (IPF). Volumetric computed tomography (CT) imaging captures the loss of normal airway tapering in IPF. We postulated that automated quantification of airway abnormalities could provide estimates of IPF disease extent and severity. We propose AirQuant, an automated computational pipeline that systematically parcellates the airway tree into its lobes and generational branches from a deep-learning-based airway segmentation, deriving airway structural measures from chest CT. Importantly, AirQuant prevents the occurrence of spurious airway branches by thick wave propagation and removes loops in the airway tree by graph search, overcoming limitations of existing airway skeletonisation algorithms. Tapering between airway segments (intertapering) and airway tortuosity computed by AirQuant were compared between 14 healthy participants and 14 IPF patients. Airway intertapering was significantly reduced, and airway tortuosity significantly increased, in IPF patients compared to healthy controls. Differences were most marked in the lower lobes, consistent with the typical distribution of IPF-related damage. AirQuant is an open-source pipeline that avoids the limitations of existing airway quantification algorithms and offers clinical interpretability. Automated airway measurements may have potential as novel imaging biomarkers of IPF severity and disease extent.
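The two measures named above have natural per-segment formulations; the sketch below shows plausible versions (our assumptions; AirQuant's exact definitions are not reproduced here):

```python
import numpy as np

def intertapering(parent_diameters, child_diameters):
    """Illustrative inter-segment tapering: fractional drop in mean lumen
    diameter from a parent airway segment to its child. Dilated
    (bronchiectatic) airways taper less, pushing this toward zero."""
    d_parent = np.mean(parent_diameters)
    d_child = np.mean(child_diameters)
    return (d_parent - d_child) / d_parent

def tortuosity(centerline_points):
    """Arc length over chord length of a segment centerline (>= 1;
    a straight segment scores exactly 1)."""
    pts = np.asarray(centerline_points, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    chord = np.linalg.norm(pts[-1] - pts[0])
    return arc / chord
```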
Secure multi-party computation (MPC) allows parties to perform computations on data while keeping that data private. This capability has great potential for machine learning applications: it facilitates training of machine learning models on private datasets owned by different parties, evaluating one party's private model on another party's private data, and so on. Although a range of studies have implemented machine learning models via secure MPC, such implementations are not yet mainstream. The adoption of secure MPC has been hampered by the lack of flexible software frameworks that "speak the language" of machine learning researchers and engineers. To foster the adoption of secure MPC in machine learning, we present CrypTen: a software framework that exposes popular secure MPC primitives via abstractions that are common in modern machine learning frameworks, such as tensor computations, automatic differentiation, and modular neural networks. This paper describes the design of CrypTen and measures its performance on state-of-the-art models for text classification, speech recognition, and image classification. Our benchmarks show that CrypTen's GPU support and high-performance communication between (an arbitrary number of) parties allow it to perform efficient private evaluation of modern machine learning models under a semi-honest threat model. For example, two parties using CrypTen can securely predict phonemes in speech recordings using Wav2Letter faster than real time. We hope that CrypTen will spur the adoption of secure MPC in the machine learning community.
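CrypTen is open source, and its tensor abstraction deliberately mirrors PyTorch's; below is a minimal example based on its public API (exact behavior may vary slightly across versions):

```python
import torch
import crypten

crypten.init()  # set up the communicator between parties

# Secret-share two tensors and compute on them without decrypting.
x = crypten.cryptensor(torch.tensor([1.0, 2.0, 3.0]))
y = crypten.cryptensor(torch.tensor([4.0, 5.0, 6.0]))
z = x * y + x  # element-wise ops run under MPC on the secret shares

print(z.get_plain_text())  # decrypt the result: tensor([ 5., 12., 21.])
```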
Remote sensing imagery provides comprehensive views of the Earth, where different sensors collect complementary data at different spatial scales. Large, pretrained models are commonly finetuned with imagery that is heavily augmented to mimic different conditions and scales, with the resulting models used for various tasks with imagery from a range of spatial scales. Such models overlook scale-specific information in the data. In this paper, we present Scale-MAE, a pretraining method that explicitly learns relationships between data at different, known scales throughout the pretraining process. Scale-MAE pretrains a network by masking an input image at a known input scale, where the area of the Earth covered by the image, not the image resolution, determines the scale of the ViT positional encoding. Scale-MAE encodes the masked image with a standard ViT backbone, and then decodes the masked image through a bandpass filter to reconstruct low/high-frequency images at lower/higher scales. We find that tasking the network with reconstructing both low- and high-frequency images leads to robust multiscale representations for remote sensing imagery. Scale-MAE achieves an average $5.0\%$ improvement in non-parametric kNN classification across eight remote sensing datasets compared to the current state of the art, and obtains a $0.9$ to $3.8$ mIoU improvement on the SpaceNet building segmentation transfer task across a range of evaluation scales.
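The key mechanism, scaling the positional encoding by ground coverage rather than pixel count, can be sketched as follows (a simplified illustration under our assumptions, not the authors' exact formulation):

```python
import math
import torch

def gsd_positional_encoding(num_pos, dim, gsd, reference_gsd=1.0):
    """Illustrative scale-aware sine-cosine encoding: positions are scaled
    by the image's ground sample distance (GSD), so two images covering the
    same ground extent get consistent encodings regardless of resolution."""
    positions = torch.arange(num_pos, dtype=torch.float32) * (gsd / reference_gsd)
    freqs = torch.exp(
        torch.arange(0, dim, 2, dtype=torch.float32) * (-math.log(10000.0) / dim)
    )
    angles = positions[:, None] * freqs[None, :]       # (num_pos, dim // 2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
```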
Traditional screening practices for anxiety and depression pose an impediment to monitoring and treating these conditions effectively. However, recent advances in NLP and speech modelling allow textual, acoustic, and hand-crafted language-based features to jointly form the basis of future mental health screening and condition detection. Speech is a rich and readily available source of insight into an individual's cognitive state, and by leveraging different aspects of speech, we can develop new digital biomarkers for depression and anxiety. To this end, we propose a multi-modal system for the screening of depression and anxiety from self-administered speech tasks. The proposed model integrates deep-learned features from audio and text, as well as hand-crafted features that are informed by clinically validated domain knowledge. We find that augmenting hand-crafted features with deep-learned features improves our overall classification F1 score, compared to a baseline of hand-crafted features alone, from 0.58 to 0.63 for depression and from 0.54 to 0.57 for anxiety. The findings of our work suggest that speech-based biomarkers for depression and anxiety hold significant promise for the future of digital health.
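A late-fusion architecture of the kind described might look like the following sketch (dimensions, layer choices, and names are our assumptions, not the paper's specification):

```python
import torch
import torch.nn as nn

class MultiModalScreener(nn.Module):
    """Illustrative late-fusion classifier combining deep audio and text
    embeddings with hand-crafted speech features for binary screening."""
    def __init__(self, audio_dim=768, text_dim=768, handcrafted_dim=88):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(audio_dim + text_dim + handcrafted_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, 2),  # e.g., condition present vs. absent
        )

    def forward(self, audio_emb, text_emb, handcrafted):
        # Concatenate all three feature streams, then classify jointly.
        return self.fuse(torch.cat([audio_emb, text_emb, handcrafted], dim=-1))
```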
This paper addresses kinodynamic motion planning for non-holonomic robots in dynamic environments with both static and dynamic obstacles -- a challenging problem that still lacks a universal solution. One of the promising approaches is to decompose the problem into smaller sub-problems and combine the local solutions into a global one. The crux of any planning method for non-holonomic robots is the generation of motion primitives that solve the local planning sub-problems. In this work we introduce a novel learnable steering function (policy) that takes into account the kinodynamic constraints of the robot as well as both static and dynamic obstacles. This policy is efficiently trained via policy optimization. Empirically, we show that our steering function generalizes well to unseen problems. We then plug the trained policy into sampling-based and lattice-based planners, and evaluate the resulting POLAMP algorithm (Policy Optimization that Learns Adaptive Motion Primitives) in a range of challenging setups involving a car-like robot operating in obstacle-rich parking-lot environments. We show that POLAMP plans collision-free kinodynamic trajectories with success rates above 92% when 50 simultaneously moving obstacles populate the environment, outperforming state-of-the-art competitors.
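To illustrate where the learned policy sits in the pipeline, here is a self-contained sketch of using it as the steering function of a sampling-based planner; the bicycle model, circle obstacles, and all interfaces are our simplifying assumptions, not POLAMP's API:

```python
import numpy as np

def propagate(state, action, dt=0.1, wheelbase=2.5):
    """Kinematic bicycle model: state = (x, y, heading, v), action = (accel, steer)."""
    x, y, th, v = state
    a, delta = action
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + v * np.tan(delta) / wheelbase * dt,
                     v + a * dt])

def in_collision(state, obstacles, robot_radius=1.0):
    """Obstacles as (x, y, radius) circles -- a deliberately crude stand-in."""
    return any(np.hypot(state[0] - ox, state[1] - oy) < r + robot_radius
               for ox, oy, r in obstacles)

def steer_with_policy(policy, state, goal, obstacles, max_steps=50, tol=0.5):
    """Roll the learned steering policy out from `state` toward `goal`;
    return the local trajectory if it connects, else None."""
    trajectory = [state]
    for _ in range(max_steps):
        state = propagate(state, policy(state, goal, obstacles))
        if in_collision(state, obstacles):
            return None                    # local steering failed
        trajectory.append(state)
        if np.hypot(*(state[:2] - goal[:2])) < tol:
            return trajectory              # connected the two nodes
    return None
```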
Most deep-learning-based continuous sign language recognition (CSLR) models share a similar backbone consisting of a visual module, a sequential module, and an alignment module. However, due to limited training samples, a connectionist temporal classification loss may not train such CSLR backbones sufficiently. In this work, we propose three auxiliary tasks to enhance CSLR backbones. The first task enhances the visual module, which is sensitive to the insufficient-training problem, from the perspective of consistency. Specifically, since the information in sign languages is mainly conveyed by signers' facial expressions and hand movements, a keypoint-guided spatial attention module is developed to force the visual module to focus on informative regions, i.e., spatial attention consistency. Second, noticing that the output features of both the visual and sequential modules represent the same sentence, to better exploit the backbone's power, a sentence embedding consistency constraint is imposed between the visual and sequential modules to enhance the representation power of both features. We name the CSLR model trained with the above auxiliary tasks consistency-enhanced CSLR; it performs well on signer-dependent datasets, in which all signers appear during both training and testing. To make it more robust in the signer-independent setting, a signer removal module based on feature disentanglement is further proposed to remove signer information from the backbone. Extensive ablation studies are conducted to validate the effectiveness of these auxiliary tasks. Remarkably, with a transformer-based backbone, our model achieves state-of-the-art or competitive performance on five benchmarks: PHOENIX-2014, PHOENIX-2014-T, PHOENIX-2014-SI, CSL, and CSL-Daily.
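The sentence embedding consistency constraint admits a simple formulation; the sketch below shows one plausible version (pooling and distance choices are our assumptions, not the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def sentence_embedding_consistency(visual_feats, sequential_feats):
    """Illustrative consistency loss between the visual and sequential
    modules: pool each module's frame-level features (B, T, D) into one
    sentence embedding (B, D), then pull the two embeddings together."""
    v = F.normalize(visual_feats.mean(dim=1), dim=-1)
    s = F.normalize(sequential_feats.mean(dim=1), dim=-1)
    return (1.0 - F.cosine_similarity(v, s, dim=-1)).mean()
```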
The xView2 competition and xBD dataset spurred significant advancements in overhead building damage detection, but the competition's pixel-level scoring can lead to reduced solution performance in areas with tight clusters of buildings or uninformative context. We seek to advance automatic building damage assessment for disaster relief by proposing an auxiliary challenge to the original xView2 competition. This new challenge involves a new dataset and metrics that indicate solution performance when damage is more local and limited than in xBD. Our challenge measures a network's ability to identify individual buildings and their damage level without excessive reliance on the buildings' surroundings. Methods that succeed on this challenge will provide more fine-grained, precise damage information than the original xView2 solutions. The performance of the best xView2 networks dropped noticeably on our new limited/local damage detection task. The common causes of failure we observe are that (1) building objects and their classifications are not separated well, and (2) when they are, the classification is strongly biased by surrounding buildings and other damage context. Thus, we release our augmented version of the dataset with additional object-level scoring metrics (https://gitlab.kitware.com/dennis.melamed/xfbd) to test the independence and separability of building objects, alongside the pixel-level performance metrics of the original competition. We also experiment with new baseline models that improve the independence and separability of building damage predictions. Our results indicate that building damage detection is not a fully solved problem, and we invite others to use and build on our dataset augmentations and metrics.
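An object-level metric of the kind described might be computed as in the sketch below (the matching threshold, names, and scoring details are our assumptions, not the exact xFBD implementation):

```python
import numpy as np

def object_level_scores(pred_buildings, gt_buildings, iou_thresh=0.5):
    """Illustrative object-level damage scoring: greedily match predicted
    building masks to ground truth by IoU, then score damage classification
    only over matched pairs, so context cannot mask per-building errors."""
    matched, correct = 0, 0
    used = set()
    for pred_mask, pred_label in pred_buildings:
        best_iou, best_j = 0.0, None
        for j, (gt_mask, _) in enumerate(gt_buildings):
            if j in used:
                continue
            inter = np.logical_and(pred_mask, gt_mask).sum()
            union = np.logical_or(pred_mask, gt_mask).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou >= iou_thresh:
            used.add(best_j)
            matched += 1
            correct += int(pred_label == gt_buildings[best_j][1])
    detection_recall = matched / max(len(gt_buildings), 1)
    cls_accuracy = correct / max(matched, 1)
    return detection_recall, cls_accuracy
```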